- North America > United States (0.04)
- North America > Canada (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.94)
- North America > United States > New York > New York County > New York City (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.94)
Normalizing flows for lattice gauge theory in arbitrary space-time dimension
Abbott, Ryan, Albergo, Michael S., Botev, Aleksandar, Boyda, Denis, Cranmer, Kyle, Hackett, Daniel C., Kanwar, Gurtej, Matthews, Alexander G. D. G., Racanière, Sébastien, Razavi, Ali, Rezende, Danilo J., Romero-López, Fernando, Shanahan, Phiala E., Urban, Julian M.
Applications of normalizing flows to the sampling of field configurations in lattice gauge theory have so far been explored almost exclusively in two space-time dimensions. We report new algorithmic developments in gauge-equivariant flow architectures that facilitate the generalization to higher-dimensional lattice geometries. Specifically, we discuss masked autoregressive transformations with tractable and unbiased Jacobian determinants, a key ingredient for scalable and asymptotically exact flow-based sampling algorithms. For concreteness, we report results from a proof-of-principle application to SU(3) lattice gauge theory in four space-time dimensions.
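The masked transformation the abstract refers to can be illustrated in miniature. Below is a minimal sketch, assuming a toy two-dimensional scalar field with a checkerboard mask rather than the paper's gauge-equivariant SU(3) construction; the conditioner networks `scale_net` and `shift_net` are hypothetical stand-ins for arbitrary neural networks. The point of the masking is that the Jacobian of the update is diagonal on the active sites, so its log-determinant is a cheap sum.

```python
import numpy as np

# Toy masked affine update on a 2D scalar field. Sites where mask == 1
# are transformed conditioned only on the frozen sites (mask == 0), so
# the Jacobian is diagonal on the active sites and its log-determinant
# is a simple sum -- the "tractable Jacobian" property.

def scale_net(frozen):
    return 0.1 * np.tanh(frozen)      # log-scale, conditioned on frozen sites

def shift_net(frozen):
    return 0.1 * frozen               # shift, conditioned on frozen sites

def masked_affine_layer(phi, mask):
    frozen = phi * (1 - mask)
    s = scale_net(frozen) * mask      # zero on frozen sites
    t = shift_net(frozen) * mask
    phi_out = frozen + mask * (phi * np.exp(s) + t)
    return phi_out, s.sum()           # (new field, log|det Jacobian|)

rng = np.random.default_rng(0)
phi = rng.normal(size=(8, 8))
mask = np.indices(phi.shape).sum(axis=0) % 2   # checkerboard mask
phi_out, log_det = masked_affine_layer(phi, mask)
```

Stacking such layers with complementary masks makes every site trainable while keeping the total log-determinant an exact sum over layers.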
- North America > United States > Wisconsin > Dane County > Madison (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (3 more...)
Gaussian Gated Linear Networks
Budden, David, Marblestone, Adam, Sezener, Eren, Lattimore, Tor, Wayne, Greg, Veness, Joel
We propose the Gaussian Gated Linear Network (G-GLN), an extension to the recently proposed GLN family of deep neural networks. Instead of using backpropagation to learn features, GLNs have a distributed and local credit assignment mechanism based on optimizing a convex objective. This gives rise to many desirable properties including universality, data-efficient online learning, trivial interpretability and robustness to catastrophic forgetting. We extend the GLN framework from classification to multiple regression and density modelling by generalizing geometric mixing to a product of Gaussian densities. The G-GLN achieves competitive or state-of-the-art performance on several univariate and multivariate regression benchmarks, and we demonstrate its applicability to practical tasks including online contextual bandits and density estimation via denoising.
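The core operation, generalizing geometric mixing to a weighted product of Gaussian densities, is compact enough to sketch. A weighted product of univariate Gaussians is itself Gaussian, with precision equal to the weighted sum of the expert precisions and mean equal to the precision-weighted average; the particular weights below are illustrative, not the paper's exact parameterization.

```python
import numpy as np

# Weighted product of univariate Gaussian experts N(mu_i, sigma2_i).
# The product is itself Gaussian: its precision is the weighted sum of
# expert precisions, and its mean is the precision-weighted average.

def gaussian_product(mus, sigmas2, weights):
    precisions = weights / sigmas2
    total_prec = precisions.sum()
    mu = (precisions * mus).sum() / total_prec
    return mu, 1.0 / total_prec        # (mean, variance) of the product

mus = np.array([0.0, 1.0, 2.0])        # expert means
sigmas2 = np.array([1.0, 0.5, 2.0])    # expert variances
weights = np.array([0.2, 0.6, 0.2])    # weights selected by the neuron's gate
print(gaussian_product(mus, sigmas2, weights))
```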
- North America > United States > New York > New York County > New York City (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Gated Linear Networks
Veness, Joel, Lattimore, Tor, Bhoopchand, Avishkar, Budden, David, Mattern, Christopher, Grabska-Barwinska, Agnieszka, Toth, Peter, Schmitt, Simon, Hutter, Marcus
This paper presents a family of backpropagation-free neural architectures, Gated Linear Networks (GLNs), that are well suited to online learning applications where sample efficiency is of paramount importance. The impressive empirical performance of these architectures has long been known within the data compression community, but a theoretically satisfying explanation of how and why they perform so well has proven difficult. What distinguishes these architectures from other neural systems is the distributed and local nature of their credit assignment mechanism; each neuron directly predicts the target and has its own set of hard-gated weights that are locally adapted via online convex optimization. By providing an interpretation, generalization and subsequent theoretical analysis, we show that sufficiently large GLNs are universal in a strong sense: not only can they model any compactly supported, continuous density function to arbitrary accuracy, but any choice of no-regret online convex optimization technique will provably converge to the correct solution given enough data. Empirically, we show a collection of single-pass learning results on established machine learning benchmarks that are competitive with results obtained with general-purpose batch learning techniques.
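A minimal single-neuron sketch of the mechanism described above, assuming a halfspace context function over the side input and plain online gradient descent on the log loss; the hyperparameters, initialization, and clipping constant are illustrative choices, not the paper's.

```python
import numpy as np

def logit(p):
    p = np.clip(p, 1e-4, 1 - 1e-4)     # keep logits finite
    return np.log(p / (1 - p))

class GLNNeuron:
    """One binary GLN neuron with halfspace gating (illustrative)."""

    def __init__(self, n_in, d_side, n_bits=2, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, d_side))    # gating halfspaces
        self.W = np.full((2 ** n_bits, n_in), 1.0 / n_in)  # weights per context
        self.lr = lr

    def _context(self, z):
        bits = (self.planes @ z >= 0).astype(int)  # side of each halfspace
        return int(bits @ (2 ** np.arange(len(bits))))

    def predict(self, p_in, z):
        c = self._context(z)
        p = 1.0 / (1.0 + np.exp(-self.W[c] @ logit(p_in)))  # geometric mixing
        return c, p

    def update(self, p_in, z, y):
        c, p = self.predict(p_in, z)
        self.W[c] -= self.lr * (p - y) * logit(p_in)  # log-loss gradient step
        return p

rng = np.random.default_rng(1)
neuron = GLNNeuron(n_in=3, d_side=5)
p_in = np.array([0.6, 0.7, 0.55])      # probabilities from the layer below
neuron.update(p_in, rng.normal(size=5), y=1)
```

Only the weight vector selected by the context is updated, which is what makes credit assignment local and the per-context objective convex.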
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Florida > Broward County > Fort Lauderdale (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- (3 more...)
Neural Processes Mixed-Effect Models for Deep Normative Modeling of Clinical Neuroimaging Data
Kia, Seyed Mostafa, Marquand, Andre F.
Normative modeling has recently been introduced as a promising approach for modeling variation of neuroimaging measures across individuals in order to derive biomarkers of psychiatric disorders. Current implementations rely on Gaussian process regression, which provides coherent estimates of uncertainty needed for the method but also suffers from drawbacks including poor scaling to large datasets and a reliance on fixed parametric kernels. In this paper, we propose a deep normative modeling framework based on neural processes (NPs) to solve these problems. To achieve this, we define a stochastic process formulation for mixed-effect models and show how NPs can be adopted for spatially structured mixed-effect modeling of neuroimaging data. This enables us to learn optimal feature representations and covariance structure for the random-effect and noise via global latent variables. In this scheme, predictive uncertainty can be approximated by sampling from the distribution of these global latent variables. On a publicly available clinical fMRI dataset, we compare the novelty detection performance of multivariate normative models estimated by the proposed NP approach to a baseline multi-task Gaussian process regression approach and show substantial improvements for certain diagnostic problems.
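The uncertainty-by-sampling scheme the abstract describes can be sketched schematically: encode the context set into the parameters of a global latent variable, then approximate the predictive distribution by decoding many samples of it. The linear encoder and decoder below are toy stand-ins for the paper's networks, and all shapes are illustrative assumptions.

```python
import numpy as np

# Schematic neural-process pipeline: the context set determines a
# distribution over a *global* latent z; predictive uncertainty is
# approximated by Monte Carlo over samples of z.

rng = np.random.default_rng(0)

def encode(x_ctx, y_ctx):
    # permutation-invariant summary of the context set -> (mu_z, sigma_z)
    r = np.concatenate([x_ctx, y_ctx], axis=1).mean(axis=0)
    return r, np.exp(-np.abs(r))

def decode(z, x_tgt):
    # predict targets from a latent sample and the target inputs
    return x_tgt @ z[: x_tgt.shape[1]]

x_ctx, y_ctx = rng.normal(size=(10, 3)), rng.normal(size=(10, 1))
x_tgt = rng.normal(size=(5, 3))

mu_z, sig_z = encode(x_ctx, y_ctx)
draws = np.stack([decode(mu_z + sig_z * rng.normal(size=mu_z.shape), x_tgt)
                  for _ in range(100)])
pred_mean, pred_std = draws.mean(axis=0), draws.std(axis=0)
```

The per-target `pred_std` is roughly the quantity a normative model would threshold when scoring an individual as deviant for novelty detection.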
- Europe (1.00)
- North America > United States > New York (0.14)
- Research Report > New Finding (0.46)
- Research Report > Experimental Study (0.46)
Improving Sequence-to-Sequence Constituency Parsing
Liu, Lemao (Tencent AI Lab) | Zhu, Muhua (Tencent AI Lab) | Shi, Shuming
Sequence-to-sequence constituency parsing casts the tree-structured prediction problem as a general sequential problem via top-down tree linearization, and thus it is very easy to train in parallel on distributed facilities. Despite its success, it relies on a general-purpose probabilistic attention mechanism, which cannot guarantee that the selected context is informative in the specific parsing scenario. Previous work introduced a deterministic attention mechanism to select informative context for sequence-to-sequence parsing, but it is based on bottom-up linearization, even though top-down linearization has been observed to outperform bottom-up linearization for standard sequence-to-sequence constituency parsing. In this paper, we therefore extend deterministic attention to operate directly on the top-down tree linearization. Extensive experiments show that our parser delivers substantial accuracy improvements over the bottom-up linearization, achieving an F-score of 92.3 on section 23 of the Penn English Treebank and 85.4 on the Penn Chinese Treebank test set, without reranking or semi-supervised training.
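The top-down linearization underlying this formulation is straightforward to sketch: a depth-first traversal emits an opening token for each non-terminal, the words beneath it, and a matching closing token. The tuple encoding of trees below is an illustrative convention, not taken from the paper.

```python
# Depth-first, top-down linearization of a constituency tree into a
# bracketed token sequence. Trees are (label, children) tuples with
# words as string leaves.

def linearize(node, out=None):
    out = [] if out is None else out
    if isinstance(node, str):          # leaf: a terminal word
        out.append(node)
        return out
    label, children = node
    out.append("(" + label)            # open the constituent
    for child in children:
        linearize(child, out)
    out.append(")" + label)            # close the constituent
    return out

tree = ("S", [("NP", ["the", "cat"]), ("VP", ["sleeps"])])
print(" ".join(linearize(tree)))
# -> (S (NP the cat )NP (VP sleeps )VP )S
```

Predicting this token sequence left to right is what reduces parsing to an ordinary sequence-to-sequence problem.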
Enhancing the Context-Enhanced Additive Heuristic with Precedence Constraints
Cai, Dunbo (Jilin University) | Hoffmann, Joerg (SAP Research) | Helmert, Malte (Albert-Ludwigs-Universitaet Freiburg)
Recently, Helmert and Geffner proposed the context-enhanced additive heuristic, where fact costs are evaluated relative to context states that arise from first achieving a pivot condition of each operator. As Helmert and Geffner pointed out, the method can be generalized to consider contexts arising from arbitrary precedence constraints over operator conditions instead. Herein, we provide such a generalization. We extend Helmert and Geffner's equations and discuss a number of design choices that arise. Drawing on previous work on goal orderings, we design a family of methods for automatically generating precedence constraints. We run large-scale experiments, showing that the technique can help significantly, depending on the choice of precedence constraints. We shed some light on this by profiling the behavior of all possible precedence constraints, using a sampling technique.
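For reference, the plain (context-free) additive heuristic that this line of work builds on can be sketched as a fixed-point computation over fact costs; the context-enhanced variant differs in that each operator condition is evaluated in the context state reached after the conditions preceding it, rather than independently. The tiny STRIPS-style task below is an illustrative assumption.

```python
import math

# Fixed-point computation of the additive heuristic h_add: the cost of
# a fact is the cheapest (operator cost + summed precondition costs)
# over its achievers. Operators are (preconditions, add effects, cost).

def h_add(facts, operators, initial, goal):
    cost = {f: 0.0 if f in initial else math.inf for f in facts}
    changed = True
    while changed:
        changed = False
        for pre, add, c in operators:
            pre_cost = sum(cost[p] for p in pre)   # the additivity assumption
            for f in add:
                if c + pre_cost < cost[f]:
                    cost[f] = c + pre_cost
                    changed = True
    return sum(cost[g] for g in goal)

facts = {"a", "b", "c", "g"}
ops = [({"a"}, {"b"}, 1), ({"b"}, {"c"}, 1), ({"b", "c"}, {"g"}, 1)]
print(h_add(facts, ops, initial={"a"}, goal={"g"}))   # -> 4.0
```

The summation in `pre_cost` is exactly where the additivity assumption lives, and it is this sum that the context-enhanced heuristic refines.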
- Europe > Germany > Baden-Württemberg > Freiburg (0.04)
- Europe > Germany > Baden-Württemberg > Karlsruhe Region > Karlsruhe (0.04)
- Asia > China > Jilin Province > Changchun (0.04)